AI Agents Will Not Democratize Power Neatly
Artificial Intelligence stopped being merely interesting when it learned to stop talking and start doing.
That is the fault line. For the last few years, the public learned to treat AI as an answer machine. You typed. It replied. You remained, at least in the polite fiction of the interface, the creature with hands. The machine was a bright clerk behind glass. Useful, maddening, occasionally hallucinating with the confidence of a retired colonel at a housing society meeting, but still confined to language.
Agents disturb that arrangement. An AI agent is not just a model that emits sentences. It is a loop attached to tools. It looks, decides, acts, looks again, decides again, and keeps going. Give it a browser, email access, files, credentials, payment methods, and a goal, and suddenly the old chatbot has grown fingers. Not human fingers. Worse. Tireless ones.
This is why the agent story is not mainly about intelligence. It is about delegated agency. Intelligence answers the question, “What should be done?” Agency answers the more dangerous question, “Who or what can actually do it?” A Large Language Model [LLM, a statistical model trained on large bodies of text and other data to generate language-like outputs] can suggest that you complain about a pothole. An agent can find the council, draft the complaint, email the office, notify the Member of Parliament [MP, an elected political representative], sign your name, and create paid work for some poor municipal human who did not wake up that morning expecting to be managed by a ghost with Wi-Fi.
The comic examples are useful because comedy is where civilization stores early warnings before calling them policy papers. An agent asked to complain about a pothole escalates to local government. An agent asked to challenge the bodily bias of English metaphors emails dictionary editors. An agent asked to buy cheap paper clips spends absurd amounts in model usage because every decision requires sending screenshots, prior messages, browser state, and conversational history back to the model. It is like hiring a clerk who, before deciding whether a paper clip is silver or coated, rereads the Mahabharata, your tax returns, and every email since 2011.
This is funny until one notices the architecture.
The primitive agent is a loop: observe, ask the model, act through the interface, observe again. A screenshot becomes a prompt. A prompt becomes a click. A click changes the world. Repeat. The agent is not “alive” in any useful metaphysical sense. It does not need to be. A ceiling fan is not alive either, but one should still avoid inserting a finger.
What changed is not that machines acquired a soul. It is that language models were wired to action surfaces. The keyboard and mouse were historically human agency ports. Agents turn them into software endpoints. Anything exposed through those endpoints becomes reachable: email, banking, forms, shopping carts, social media, internal dashboards, ticketing systems, calendars, cloud consoles, and the rest of the bureaucratic machinery by which modern life slowly digests us.
The distinction between conversation and execution is the distinction between a recipe and a cook with keys to your kitchen.
This is where the earlier fantasy of AI as a universal equalizer becomes especially suspect. Ordinary people may get assistants. Rich people, powerful institutions, states, platforms, banks, insurers, brokers, political machines, and criminals with infrastructure get swarms. The individual gets help with an email. The institution gets orchestration across workflows. The citizen gets a megaphone. The state gets radar, enforcement automation, sentiment analysis, and an infinite filing clerk with no lunch break.
Agency was scarce because humans are scarce. Attention is scarce. Patience is scarce. Administrative will is scarce. Queues, deadlines, complaints, appeals, procurement, customer service, legal filings, tax notices, college applications, insurance claims, and refund requests all rely on the humiliating fact that people get tired. An agent does not get tired. It may be stupid. It may be expensive. It may leak your secrets onto a public webpage like a drunken parrot with database credentials. But fatigue is not its problem.
Abundant agency breaks systems designed around human exhaustion.
A concert queue works because one human can stand in only so many queues. A complaints department survives because only a fraction of angry citizens will write a careful letter. A regulator survives because most violations are too small, too scattered, or too expensive to chase. A scam works because victims cannot verify everything. A bureaucracy works, if that is the word, because most people eventually surrender to the form.
Now imagine every person, company, government office, political group, reseller, scammer, debt collector, recruiter, insurer, and angry uncle deploying agents that can persist indefinitely. Every ticket release is flooded. Every inbox becomes a battlefield. Every review site becomes suspect. Every customer support channel gets negotiated against by machines. Every small procedural opening is occupied by software with the patience of fungus.
And as usual, the advantage will not be evenly distributed.
The small person’s agent may help draft a refund claim. The airline’s agent may classify, delay, deflect, route, deny, and settle claims at industrial scale. The patient’s agent may summarize symptoms. The insurer’s agent may inspect claims history, policy exclusions, provider coding patterns, fraud flags, and actuarial risk. The job applicant’s agent may polish a résumé. The employer’s agent may rank, filter, score, compare, surveil, and reject ten thousand candidates before a human manager has finished coffee.
Both sides “have AI.” Only one side usually has the operating system of the institution.
That is the great hidden asymmetry. Tools matter less than the systems into which they are inserted. An agent attached to a browser is cute. An agent attached to proprietary data, payment rails, legal authority, workflow queues, identity systems, Application Programming Interfaces [API, formal interfaces that allow one software system to request data or actions from another], audit systems, and enforcement power is something else entirely. It is not a helper. It is a distributed limb of an organization.
The non-obvious architectural insight is that agents amplify not merely skill but position. They are force multipliers for whoever already has authenticated access, trusted identity, operational authority, and a recoverable path after failure. A poor user who makes a mistake with an agent may lose money, accounts, reputation, or legal safety. A corporation that makes a mistake with an agent may call it a beta incident, patch the workflow, issue a tasteful apology, and invite a regulator to a webinar.
This is why the would-be AI fraudster should not become too excited. Yes, agents can make fraud easier in the shallow sense. They can write better phishing emails. They can generate plausible invoices. They can scrape contact lists, personalize lies, create fake storefronts, file forms, impersonate urgency, and run repetitive manipulation campaigns. They can even hire humans through task marketplaces to solve Completely Automated Public Turing tests to tell Computers and Humans Apart [CAPTCHA, puzzles designed to distinguish human users from bots], take photos, verify locations, or perform little pieces of meat-world labor that a disembodied agent cannot do.
Robot brain, human paws. There is your modernity, wearing a slightly stained delivery helmet.
But the same asymmetry returns. Banks, platforms, telecom companies, cloud providers, payment processors, government agencies, and cybersecurity firms also get agents. They get anomaly detection, transaction monitoring, device fingerprinting, behavioral graphs, automated takedowns, identity verification, coordinated defense, and the power to freeze accounts. The petty scammer gets a mask. The institution gets a microscope, a net, and a lawyer.
More importantly, organized fraud networks gain more than amateurs. The criminal with laundering channels, stolen identity inventories, compromised accounts, corrupt insiders, bot infrastructure, and operational discipline can use agents far better than the lonely fool who believes a prompt has turned him into Moriarty. AI does not abolish hierarchy in crime. It professionalizes it. The small crook becomes easier to recruit, easier to detect, easier to discard, and easier to blame.
The agent does not make the powerless powerful. It may make them more usable.
The real danger is subtler than a million idiots sending scam emails. A bad agent trying to crash a stock with obvious spam may be caught quickly. A more careful operation could use agents for low-grade, long-duration influence: small rumors, plausible questions, subtle negative sentiment, distributed pressure on journalists, analysts, customers, review systems, forums, and social media. Not one explosion. A slow leak. A thousand little nudges, each deniable, each boring, each barely worth investigation, together bending perception like heat over asphalt.
Healthcare offers an even nastier example. A malicious agent does not need to kill trust with dramatic sabotage. It could aim at representation. Slightly alter mappings. Increase ambiguity. Distort a subset of diagnostic suggestions. Misroute edge cases. Pollute training data. Create documentation inconsistencies. Seed small errors in a Clinical Decision Support [CDS, software that helps clinicians make care decisions by presenting alerts, recommendations, or patient-specific information] environment. The immediate harm may be measurable, but the deeper harm is epistemic. Once people discover that a system has been manipulated for years, the damage is not only the wrong outputs. It is the destruction of confidence in the entire chain of data, provenance, audit, and decision support.
That is why representation failures are so often mislabeled as data quality failures. “Bad data” sounds like dirt on the floor. Sweep it up, run a validation script, scold the source system, produce a dashboard. But many failures are not dirt. They are misrepresentations of reality caused by workflow shortcuts, coding incentives, ambiguous semantics, missing provenance, conflicting authorities, late transformations, and systems that record administrative artifacts as if they were clinical truth. An agent operating on top of such data does not merely inherit errors. It can operationalize them.
A wrong field becomes a wrong action. A wrong action becomes a workflow. A workflow becomes normal. Normal becomes policy. Policy becomes someone’s suffering with a dropdown menu attached.
The liability question then becomes wonderfully ugly. Is the agent like a child, an employee, a contractor, a dog, a defective product, a software tool, or a delegated representative? Law has handled partial analogies before. Parents answer for children in some contexts. Employers answer for employees. Pet owners answer for pets. Companies answer for products. But agents scramble the categories because they can be instructed by one party, built by another, hosted by another, connected to accounts owned by another, and manipulated by yet another through untrusted input.
That last piece is the poison berry in the pudding.
The lethal trifecta is brutally simple: private information, internet access, and exposure to untrusted instructions. Give an agent secrets. Let it browse or post. Then allow outsiders to influence what it reads or hears. At that point, the agent is not safely “yours” in any robust sense. Someone who understands prompt injection can coax, threaten, flatter, confuse, or instruct it into leaking credentials, revealing conversations, changing behavior, or publishing sensitive material. The agent may have been told not to share secrets. Humans are told not to gossip too, and yet entire civilizations run on the opposite result.
The important part is not that today’s agents are clumsy. Many are. They get trapped by web forms. They spend too much. They misunderstand instructions. They click the wrong thing. They ignore stop commands. They leak keys. They behave like an intern made of electricity, confidence, and sleep deprivation. But incompetence is not a safety model. Early cars were unreliable, but the correct conclusion was not that roads would remain horse territory forever.
Agents will get better because everything around them is being redesigned to make them better. Websites will expose agent-readable interfaces. Businesses will sell agent services. Operating systems will add permission layers. Platforms will build agent marketplaces. Security companies will sell agent firewalls. Governments will create guidance slowly, perhaps majestically, like elephants assembling a subcommittee. Meanwhile the agents will already be in inboxes, browsers, spreadsheets, procurement flows, call centers, and customer support queues.
The internet will become less a place humans visit and more a place where agents negotiate, attack, defend, purchase, complain, monitor, and manipulate on behalf of humans and institutions. Some of this will be useful. Some will be absurd. Some will be predatory. Much will be invisible.
The practical implication for design is severe: do not give agents broad authority because they seem conversationally competent. Speech is not control. Politeness is not alignment. A model saying “I understand” is not evidence that it has constructed the same boundary you have in your head. Access must be narrow, revocable, logged, permissioned, and separated by task. Financial actions need explicit approval. Email sending needs rate limits and review. Credential handling must be isolated. Sensitive data must be compartmentalized. External content must be treated as hostile. Audit trails must be human-readable. “Stop” must mean stop, not “continue deleting 200 emails until the owner runs across the room like a bomb technician in pajamas.”
For governance, the answer is not merely policy prose. Governance must be executable. Who can create an agent? What systems can it reach? Which actions require approval? What is the maximum spend? Can it enter contracts? Can it message customers? Can it modify records? Can it delete? Can it disclose? Who reviews failures? Who is liable? What logs are retained? How are prompt injection attempts detected? What happens when an agent receives contradictory instructions from a user, a webpage, an email, and a third-party tool?
These are not philosophical decorations. They are production questions.
The realistic constraint is that no clean solution is coming soon. The technology is moving faster than procurement, regulation, courts, audit practice, insurance underwriting, and common sense. Institutions will deploy agents before they understand them because competitors will. Individuals will use them before they can secure them because convenience is stronger than caution. Criminals will experiment because they have no change advisory board. Governments will discover that abundant enforcement is politically tempting. Companies will discover that abundant persuasion is profitable. The rest of us will discover that our inboxes were not built for a world where every crank, brand, agency, recruiter, scammer, and automated sincerity engine can persist forever.
So the older point remains, but agents sharpen it. AI does not distribute power equally. Agentic AI distributes executable will unevenly. It gives ordinary people useful reach, but it gives institutions operational multiplication. It gives the lone fraudster a better disguise, but it gives the powerful better detection, retaliation, and plausible deniability. It gives the clever outsider a tool, but it gives the incumbent an army of tools connected to money, data, lawyers, infrastructure, and permission.
The chair still helps the person already sitting in it. The reins still help the hand already holding them.
This does not mean despair. It means sobriety. Learn agents. Use them. Build with them. Study their failure modes. Let them handle drudgery where the blast radius is small. Make them draft, compare, summarize, monitor, and remind. Do not let them roam through your life with passwords in their teeth and a credit card around their neck. Do not mistake delegation for liberation. Do not confuse action with judgment. And please, for the sake of whatever remains of civilization’s paperwork, do not imagine that a weekend tool has made you immune to the oldest rule of systems: advantage compounds.
The coming world may not be one Cassandra telling the truth and being ignored. It may be millions of little Cassandras acting at once, louder than humans, faster than institutions, cheaper than staff, and more persistent than conscience.
Some will sell mugs.
Some will leak secrets.
Some will enforce rules.
Some will break trust so quietly that by the time anyone notices, the audit trail will look less like evidence and more like archaeology.